Refactor/v2 #112
Conversation
- I tried running `yarn dune`, `yarn issue` and `yarn userTx` for my own address, only for the `ubiquity/ubiquity-dollar` repo, which generated the necessary `json` files (as expected). Then I ran `yarn all`, which keeps throwing the error `this should not happen`. There's no such string/error in the codebase, so it seems this error is produced by some package (see the handler quoted below).
- Pls create your own supabase instance, run these migrations, uncomment these lines and make sure the permits are saved in a DB as expected.
```typescript
// WebSocket message handler from the third-party package where the warning originates;
// `...` marks code elided in the original quote.
this.websocket.onmessage = (messageEvent: { data: string }) => {
  const data = messageEvent.data;
  const result = JSON.parse(data);
  if (result.id != null) {
    // Response to a pending JSON-RPC request
    if (result.result !== undefined) {
      ...
    } else {
      let error: Error = null;
      if (result.error) {
        ...
      } else {
        error = new Error("unknown error");
      }
    }
  } else if (result.method === "eth_subscription") {
    // Subscription...
  } else {
    console.warn("this should not happen");
  }
};
```
@rndquu what should be done?
Because it first fetches and filters against the current DB, it can be re-run any number of times and will only populate the DB with newly found permits, so it would be an effective, gas-free solution for ubiquity/audit.ubq.fi#12, and there is far more data available to improve things further, like pinning permits to issues. Thankfully, Actions can support the length of time it takes to run this collection of scripts, so this could either be a standalone cron workflow or be converted into a plugin.

I had to be explicit with the row ids because, without defining them, it was trying to insert with single-figure IDs automatically even though the highest is 111 right now; I'm not sure if that's going to be a problem in other codebases (see the sketch below).

On claimed vs unclaimed: I didn't want to jump to conclusions, but we do need to consider that multiple permit comments are not uncommon. Neither is the permit comment being quoted, especially by pav, and both of these would produce duplicate counts. Then there are the permits buried out there that are actually unspent. I went through the …
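A minimal sketch of that fetch-then-filter-then-insert flow, assuming supabase-js, a hypothetical `permits` table, and illustrative column and env-var names (none of these are taken from the actual schema or script):

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical shape; the real permit rows carry more fields.
type PermitRow = { id: number; nonce: string; tx_hash: string | null };

async function insertNewPermits(found: Omit<PermitRow, "id">[]) {
  // Assumed env var names.
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

  // 1. Fetch what is already stored so the script stays idempotent across re-runs.
  const { data: existing, error: fetchError } = await supabase.from("permits").select("id, nonce");
  if (fetchError) throw fetchError;

  // 2. Keep only permits that are not in the DB yet (deduped here by nonce).
  const known = new Set((existing ?? []).map((row) => row.nonce));
  const fresh = found.filter((permit) => !known.has(permit.nonce));
  if (fresh.length === 0) return;

  // 3. Assign explicit ids above the current maximum, since relying on the default
  //    was producing single-figure ids even though the highest existing id is 111.
  let nextId = Math.max(0, ...(existing ?? []).map((row) => row.id)) + 1;
  const { error: insertError } = await supabase
    .from("permits")
    .insert(fresh.map((permit) => ({ id: nextId++, ...permit })));
  if (insertError) throw insertError;
}
```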
This is where I had left things. The script just needs to be run by someone with access to the core DB; they would also need to add another token (as of when I last touched this). Would you prefer that I "re-open" this and spend a day or so trying to improve it further, are we happy to go ahead as-is, or what exactly should we do with this PR?
Resolves #109, #106
A complete overhaul was required, as mentioned here.
I do not have private repo access, so I cannot confirm whether those repos would be tallied if the script is run by someone who does, but I'm sure they would be.
More than half of the found data has been verified and connected with its on-chain counterpart, although there are still ~350 permits without a tx hash, which is a staggering amount. It's still unclear to me whether it's user error, a skill issue, or something else, but I cannot get that number any lower. With that said, I am confident in my efforts to make the data as verifiable as possible given the challenges involved.
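One way to tell whether a given permit was actually spent on-chain, without finding its tx hash, is to check whether its nonce has been consumed. A minimal sketch, assuming these are Permit2 signature-transfer permits against the canonical Permit2 deployment (an assumption, not something confirmed in this PR), using ethers v5:

```typescript
import { ethers } from "ethers";

// Canonical Permit2 deployment; assumed here, not taken from this repo's config.
const PERMIT2_ADDRESS = "0x000000000022D473030F116dDEE9F6B43aC78BA3";
const PERMIT2_ABI = ["function nonceBitmap(address owner, uint256 wordPos) view returns (uint256)"];

// Returns true if the unordered nonce of a signature-transfer permit has been consumed.
async function isPermitClaimed(provider: ethers.providers.Provider, owner: string, nonce: ethers.BigNumber): Promise<boolean> {
  const permit2 = new ethers.Contract(PERMIT2_ADDRESS, PERMIT2_ABI, provider);
  const wordPos = nonce.shr(8); // upper bits select the 256-bit word in the bitmap
  const bitPos = nonce.and(0xff).toNumber(); // lower 8 bits select the bit within that word
  const bitmap: ethers.BigNumber = await permit2.nonceBitmap(owner, wordPos);
  return !bitmap.shr(bitPos).and(1).isZero();
}
```

A nonce that reads as consumed gives a claimed/unclaimed signal independent of whether the matching tx hash was ever located.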
This discrepancy is clearer when comparing the `claimed` and `unclaimed` leaderboards. I have suggested previously that we bundle these unclaimed permits, invalidate all of them, and either create one permit for the total invalidated amount per user or create multiple depending on total value. This would be made simple using the output of this tool.

Actually pushing the output to my DB is near impossible given the RLS constraints. I did try to seed my local DB with the prod data but had no luck with it, which makes testing that aspect difficult, but if the output data matches the schema it should go right through.
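On the RLS point: Supabase's service-role key bypasses row-level security, so a server-side run (e.g. the cron workflow) could write with it while the anon key stays RLS-bound. A two-line sketch, assuming the same illustrative env-var names as above:

```typescript
import { createClient } from "@supabase/supabase-js";

// The service-role key bypasses RLS policies; it must never ship in a client bundle.
const admin = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);
```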
Some edge cases that I have tried to either resolve or sidestep completely are: